33 research outputs found
L1-Penalization in Functional Linear Regression with Subgaussian Design
We study functional regression with random subgaussian design and real-valued
response. The focus is on the problems in which the regression function can be
well approximated by a functional linear model with the slope function being
"sparse" in the sense that it can be represented as a sum of a small number of
well separated "spikes". This can be viewed as an extension of now classical
sparse estimation problems to the case of infinite dictionaries. We study an
estimator of the regression function based on penalized empirical risk
minimization with quadratic loss and the complexity penalty defined in terms of
L1-norm (a continuous version of LASSO). The main goal is to introduce
several important parameters characterizing sparsity in this class of problems
and to prove sharp oracle inequalities showing how the L2-error of the
continuous LASSO estimator depends on the underlying sparsity of the problem.
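A minimal finite-dimensional sketch of the continuous LASSO idea: discretize the design curves and the slope function on a grid, so the integral becomes a dot product, and solve an ordinary L1-penalized least-squares problem by proximal gradient descent (ISTA). The grid size, noise level, penalty level, and spike locations below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization of Y_i = ∫ X_i(t) β(t) dt + noise on m grid points.
n, m = 200, 100
dt = 1.0 / m
X = rng.standard_normal((n, m))          # subgaussian design curves X_i(t_j)
beta = np.zeros(m)
beta[20], beta[70] = 5.0, -3.0           # two well-separated "spikes"
y = (X * dt) @ beta + 0.01 * rng.standard_normal(n)

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(A, y, lam, n_iter=2000):
    """ISTA for (1/2n)||y - A b||^2 + lam ||b||_1."""
    n_obs = len(y)
    L = np.linalg.norm(A, 2) ** 2 / n_obs   # Lipschitz constant of the smooth part
    b = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ b - y) / n_obs
        b = soft_threshold(b - grad / L, lam / L)
    return b

b_hat = lasso_ista(X * dt, y, lam=1e-4)  # recovers a sparse, spiky slope
```

The recovered slope is sparse with spikes near the true locations, though shrunk toward zero by the usual LASSO bias.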
Efficient median of means estimator
The goal of this note is to present a modification of the popular median of
means estimator that achieves sub-Gaussian deviation bounds with nearly optimal
constants under minimal assumptions on the underlying distribution. We build on
recent work on the topic by the author, and prove that the desired guarantees
can be attained under weaker requirements.
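The basic median-of-means construction behind this line of work can be sketched in a few lines: split the sample into k disjoint blocks, average each block, and return the median of the block means. The block count and the contaminated sample below are illustrative choices.

```python
import random
import statistics

def median_of_means(sample, k):
    """Split the sample into k equal blocks, average each block,
    and return the median of the k block means."""
    n = len(sample)
    if not 1 <= k <= n:
        raise ValueError("need 1 <= k <= len(sample)")
    b = n // k                               # block length; trailing points dropped
    block_means = [sum(sample[i * b:(i + 1) * b]) / b for i in range(k)]
    return statistics.median(block_means)

# A Gaussian sample plus a few huge outliers, mimicking heavy tails.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)] + [1e6] * 3
random.shuffle(data)
mom_est = median_of_means(data, k=30)        # only a few blocks are corrupted
naive = sum(data) / len(data)                # wrecked by the outliers
```

Only the handful of blocks containing an outlier are corrupted, and the median ignores them, which is the mechanism behind the sub-Gaussian deviation bounds discussed in the abstract.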
Asymptotic normality of robust risk minimizers
This paper investigates asymptotic properties of a class of algorithms that
can be viewed as robust analogues of the classical empirical risk minimization.
These strategies are based on replacing the usual empirical average by a robust
proxy of the mean, such as a (version of the) median-of-means estimator. It
is well known by now that the excess risk of resulting estimators often
converges to 0 at optimal rates under much weaker assumptions than those
required by their "classical" counterparts. However, much less is known about
the asymptotic properties of the estimators themselves, for instance, whether
robust analogues of the maximum likelihood estimators are asymptotically
efficient. We take a step towards answering these questions and show that for a
wide class of parametric problems, minimizers of the appropriately defined
robust proxy of the risk converge to the minimizers of the true risk at the
same rate, and often have the same asymptotic variance, as the estimators
obtained by minimizing the usual empirical risk. Moreover, our results show
that robust algorithms based on the so-called "min-max" type procedures in many
cases provably outperform, in the asymptotic sense, algorithms based on direct
risk minimization.
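The min-max procedures analyzed in the paper are more refined, but the general idea of replacing the empirical average of the loss by a robust proxy can be illustrated with a toy location problem: minimize, over a grid of candidate parameters, the median-of-means estimate of the risk E(X - theta)^2. The loss, grid, and contamination below are illustrative assumptions.

```python
import random
import statistics

def mom(values, k):
    """Median-of-means of a list: k block means, then their median."""
    b = len(values) // k
    return statistics.median(sum(values[i * b:(i + 1) * b]) / b for i in range(k))

def robust_risk_minimizer(data, candidates, k=20):
    """Minimize the MoM proxy of the risk E(X - theta)^2 over candidate
    parameters, instead of the empirical average of the loss."""
    return min(candidates, key=lambda t: mom([(x - t) ** 2 for x in data], k))

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(2000)] + [1e4] * 5
random.shuffle(data)
grid = [i / 20 for i in range(-100, 101)]    # thetas in [-5, 5], step 0.05
theta_mom = robust_risk_minimizer(data, grid)  # stays close to the true mean 2.0
theta_erm = sum(data) / len(data)              # the plain ERM solution, dragged off
```

The empirical risk minimizer for squared loss is the sample mean, which the outliers pull far from 2.0, while the minimizer of the median-of-means proxy is essentially unaffected.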